
Continuous-Time Bayesian Network



Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed-form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem, especially for structure learning tasks, where this has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a new variational method that allows for a closed-form calculation of this mixture marginal likelihood. We show the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
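The mixture-of-generators idea can be illustrated at toy scale. The sketch below is an illustrative assumption throughout, not the paper's method: it uses a single binary variable, two made-up candidate generators standing in for structures, invented sufficient statistics, and plain finite-difference gradient ascent rather than the closed-form variational updates the paper derives. It only shows the core mechanic: softmax mixture weights over candidate generators, optimized by gradient.

```python
import numpy as np

# Two hypothetical candidate generators (conditional intensity matrices)
# for one binary variable -- stand-ins for generators induced by two
# different parent sets.  Rows sum to zero, off-diagonals are rates.
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-0.2, 0.2], [0.5, -0.5]])
candidates = [Q1, Q2]

# Invented sufficient statistics:
# T[x] = total dwell time in state x, M[x, y] = transition counts.
T = np.array([3.0, 1.5])
M = np.array([[0.0, 6.0], [3.0, 0.0]])

def log_likelihood(Q):
    """Homogeneous-CTMC log-likelihood from sufficient statistics."""
    off = ~np.eye(2, dtype=bool)
    return np.sum(T * np.diag(Q)) + np.sum(M[off] * np.log(Q[off]))

def mixture(theta):
    """Convex mixture of candidate generators with softmax weights."""
    w = np.exp(theta - theta.max())
    w /= w.sum()
    return sum(wk * Qk for wk, Qk in zip(w, candidates)), w

# Plain gradient ascent on the mixture weights via central differences
# (purely for illustration).
theta = np.zeros(2)
eps, lr = 1e-6, 0.5
for _ in range(200):
    grad = np.zeros_like(theta)
    for k in range(2):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        grad[k] = (log_likelihood(mixture(tp)[0])
                   - log_likelihood(mixture(tm)[0])) / (2 * eps)
    theta += lr * grad

_, w = mixture(theta)
# w now concentrates on the candidate that best explains the statistics.
```

In this toy run the weights concentrate on the first candidate, whose rates lie closer to the empirical rates M / T implied by the statistics.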


Cluster Variational Approximations for Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

Continuous-time Bayesian networks (CTBNs) constitute a general and powerful framework for modeling continuous-time stochastic processes on networks. This makes them particularly attractive for learning the directed structures among interacting entities. However, if the available data is incomplete, one needs to simulate the prohibitively complex CTBN dynamics. Existing approximation techniques, such as sampling and low-order variational methods, either scale unfavorably in system size, or are unsatisfactory in terms of accuracy. Inspired by recent advances in statistical physics, we present a new approximation scheme based on cluster-variational methods that significantly improves upon existing variational approximations. We can analytically marginalize the parameters of the approximate CTBN, as these are of secondary importance for structure learning. This recovers a scalable scheme for direct structure learning from incomplete and noisy time-series data. Our approach outperforms existing methods in terms of scalability.
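The analytic parameter marginalization rests on Gamma-exponential conjugacy: given sufficient statistics M (transition count) and T (dwell time) for a rate, the rate can be integrated out in closed form. A minimal sketch of that conjugate marginalization, using a plain CTMC rate rather than a full CTBN, hypothetical counts, and unit Gamma hyperparameters (all illustrative assumptions):

```python
import numpy as np
from scipy.special import gammaln

def log_marginal_rate(M, T, alpha=1.0, beta=1.0):
    """Log marginal likelihood of M transitions over dwell time T,
    with the exponential rate integrated out under a Gamma(alpha, beta)
    prior (standard conjugacy)."""
    return (gammaln(alpha + M) - gammaln(alpha)
            + alpha * np.log(beta) - (alpha + M) * np.log(beta + T))

# Hypothetical sufficient statistics for a node B, split by the state of
# a candidate parent A: B flips rarely while A=0, often while A=1.
stats_given_a = {0: (1, 10.0), 1: (20, 10.0)}  # (M, T) per parent state

# Structure score with A as parent: one marginalized rate per condition.
score_with_parent = sum(log_marginal_rate(M, T)
                        for M, T in stats_given_a.values())

# Structure score without the parent: pool the statistics, one rate.
M_pool = sum(M for M, _ in stats_given_a.values())
T_pool = sum(T for _, T in stats_given_a.values())
score_without_parent = log_marginal_rate(M_pool, T_pool)
```

On these statistics the parent-dependent model scores higher, so a score-based search would keep the edge from A to B; no rate parameter was ever fitted explicitly.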




Reviews: Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

Summary: In this manuscript, the authors extend continuous-time Bayesian networks by incorporating a mixture prior over the conditional intensity matrices, thereby allowing for a larger class of priors than the gamma prior usually employed. My main concerns are with clarity and quality: the manuscript is quite densely written, and a fair amount of material has either been omitted or shifted to the appendix. For a non-expert in continuous-time Bayesian networks, it is quite hard to read. Additionally, there are quite a few minor mistakes (see below) that make the manuscript harder to understand. Originality: The authors combine the variational inference method from Linzner et al. [11] with the new mixture prior over the dependency structure.


Reviews: Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

This paper contributes a new technique for the estimation of structure in continuous time Bayesian networks, and completes the picture with an accompanying inference method and an illustration on a real-world problem. There is agreement among reviewers that this is a high quality contribution, if one takes the confidence-weighted scores from reviewers into account. As a point for improvement for the paper, we could reiterate a comment that was raised in the reviewer discussion: "[the paper] is missing reasonable and helpful experimental comparisons that are not hard to do, given that the code exists already in CTBN-RLE" and the authors are encouraged to consider broadening their experimental comparisons for a final published version.



Reviews: Cluster Variational Approximations for Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

The paper introduces a generalization of previous variational methods for inference with jump processes; here, the approximating measure proposed for the posterior relies on a star approximation. Applied to continuous-time Bayesian networks, this means isolating clusters of nodes across children and parents in order to build an efficient approximation to the traditional variational lower bound. The paper further presents examples and experiments that show how the proposed approach can be adapted to structure learning tasks in continuous-time settings. This is an interesting and topical contribution likely to appeal to the statistical and probabilistic community within NIPS. The paper is, overall, well-written and reasonably well-structured. It offers a good background on previous work, helping the reader understand its relevance and putting its results in context within the existing literature.


Active Learning of Continuous-time Bayesian Networks through Interventions

Linzner, Dominik, Koeppl, Heinz

arXiv.org Machine Learning

We consider the problem of learning structures and parameters of Continuous-time Bayesian Networks (CTBNs) from time-course data under minimal experimental resources. In practice, the cost of generating experimental data poses a bottleneck, especially in the natural and social sciences. A popular approach to overcome this is Bayesian optimal experimental design (BOED). However, BOED becomes infeasible in high-dimensional settings, as it involves integration over all possible experimental outcomes. We propose a novel criterion for experimental design based on a variational approximation of the expected information gain. We show that for CTBNs, a semi-analytical expression for this criterion can be calculated for structure and parameter learning. By doing so, we can replace sampling over experimental outcomes with solving the CTBN's master equation, for which scalable approximations exist. This alleviates the computational burden of sampling possible experimental outcomes in high dimensions. We employ this framework to recommend interventional sequences. In this context, we extend the CTBN model to conditional CTBNs in order to incorporate interventions. We demonstrate the performance of our criterion on synthetic and real-world data.
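The master-equation route can be sketched on a tiny network. The example below is a hedged illustration, not the paper's criterion: a two-node binary CTBN with made-up rates is amalgamated into a single four-state Markov generator, and the marginal distribution is propagated by matrix exponentiation instead of trajectory sampling.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-node CTBN: binary node A evolves on its own, binary node B
# flips at a rate that depends on A's current state.  All rates are
# made up for illustration.
a_rate = 1.0                    # A flips 0 <-> 1 at this rate
b_rate = {0: 0.1, 1: 2.0}       # B's flip rate given A's state

# Amalgamate into one 4-state Markov generator over joint states (A, B).
states = [(a, b) for a in (0, 1) for b in (0, 1)]
n = len(states)
Q = np.zeros((n, n))
for i, (a, b) in enumerate(states):
    Q[i, states.index((1 - a, b))] = a_rate      # A flips
    Q[i, states.index((a, 1 - b))] = b_rate[a]   # B flips, given A
    Q[i, i] = -Q[i].sum()                        # rows sum to zero

# Solve the master equation p'(t) = p(t) Q by matrix exponentiation,
# rather than sampling trajectories.
p0 = np.array([1.0, 0.0, 0.0, 0.0])  # start in (A=0, B=0)
p_t = p0 @ expm(Q * 2.0)             # joint marginal at t = 2
```

For a handful of nodes the amalgamated generator is small enough to exponentiate exactly; the exponential blow-up in the joint state space is precisely why the scalable approximations mentioned in the abstract are needed at larger sizes.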